Add Qwen2 GGUF loading support #31175
Conversation
younesbelkada
left a comment
Thanks a lot for this great contribution! Can you confirm the other slow tests pass? 🙏 I left a few minor comments, what do you think?
younesbelkada
left a comment
Great work! Thanks for adding Qwen2 support for GGUF files! Can you run the styling checks (`make fixup`)? After that, this PR is ready IMO.
amyeroberts
left a comment
Thanks for adding!
What does this PR do?
Use `model_type` for GGUF tokenizer converter selection instead of `tokenizer_type`. According to `convert-hf-to-gguf.py`, most models may register their tokenizer as a `gpt2` tokenizer, so `model_type` is used to select the corresponding tokenizer converter instead of `tokenizer_type`.
Before submitting
- Did you read the contributor guideline, Pull Request section?
- Was this discussed/approved via a GitHub issue or the forum? Please add a link to it if that's the case.
- Did you make sure to update the documentation with your changes? Here are the documentation guidelines, and here are tips on formatting docstrings.
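The converter-selection idea behind this PR can be sketched as follows. This is a minimal illustration, not the actual transformers implementation: the class names, the `GGUF_TO_FAST_CONVERTERS` registry, and `convert_gguf_tokenizer` are all hypothetical stand-ins. The key point is that the lookup is keyed by the model architecture (`model_type`) rather than by the tokenizer type stored in the GGUF file, since `convert-hf-to-gguf.py` registers most models' tokenizers as `gpt2` and that field therefore cannot distinguish, e.g., Qwen2 from GPT-2:

```python
class GGUFLlamaConverter:
    """Illustrative converter for llama-family GGUF tokenizers."""

    def convert(self, tokenizer_dict):
        return f"llama tokenizer built from {len(tokenizer_dict)} GGUF fields"


class GGUFQwen2Converter:
    """Illustrative converter for Qwen2 GGUF tokenizers."""

    def convert(self, tokenizer_dict):
        return f"qwen2 tokenizer built from {len(tokenizer_dict)} GGUF fields"


# Registry keyed by model_type, not tokenizer_type: many architectures
# share the "gpt2" tokenizer type in GGUF files, so the architecture
# name is the reliable selector.
GGUF_TO_FAST_CONVERTERS = {
    "llama": GGUFLlamaConverter,
    "qwen2": GGUFQwen2Converter,
}


def convert_gguf_tokenizer(model_type, tokenizer_dict):
    """Pick a converter by architecture and run it on the GGUF tokenizer data."""
    if model_type not in GGUF_TO_FAST_CONVERTERS:
        raise ValueError(f"No GGUF tokenizer converter registered for model_type={model_type!r}")
    return GGUF_TO_FAST_CONVERTERS[model_type]().convert(tokenizer_dict)
```

With this shape, adding GGUF support for a new architecture is just a matter of registering one more converter class under its `model_type` key.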
Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.